In this section we will begin the process of analysing the RNA-seq data in R. In the next section we will use DESeq2 (Love, Huber, and Anders 2014) for differential expression analysis. Before we do that we need to read the count data into R, reformat it into an object suitable for DESeq2, filter out uninformative genes, and carry out some quality assessment of the samples.
We will also look at the effects of normalisation for composition bias.
First, let’s load all the packages we will need to analyse the data.
library(tidyverse)
library(DESeq2)
The data for this tutorial comes from a Nature Cell Biology paper, EGF-mediated induction of Mcl-1 at the switch to lactation is essential for alveolar cell survival (Fu et al. 2015). The raw data (sequence reads) can be downloaded from SRA under SRP045534, and processed data (counts) can be downloaded from Gene Expression Omnibus database (GEO) under accession number GSE60450. Please see Supplementary material for instructions on downloading raw files from SRA and aligning fastq using HISAT2.
This study examines the expression profiles of basal stem-cell enriched cells (B) and committed luminal cells (L) in the mammary gland of virgin, pregnant and lactating mice. Six groups are present, with one for each combination of cell type and mouse status. Each group contains two biological replicates.
The sampleinfo file contains basic information about the samples that we will need for the analysis today.
# Read the sample information into a data frame
sampleinfo <- read_tsv("data/SampleInfo.txt")
sampleinfo
The raw reads were aligned using HISAT2 (Kim, Langmead, and Salzberg 2015) to the GRCm38 mouse reference genome from Ensembl. featureCounts (Liao, Smyth, and Shi 2014) was used to count reads against the Ensembl gene annotation and generate a counts matrix (as described in Section 1).
First we need to read the data into R from the file in the data directory.
# Read the data into R
seqdata <- read_tsv("data/GSE60450_Lactation.featureCounts", comment = "#")
seqdata
In the seqdata object each row represents a gene. The columns contain:

1 Geneid - the Ensembl gene ID
2-5 Chr, Start, End, Strand - the genomic locations of the exons of the gene
6 Length - the transcript length of the gene
7-18 one column for each sample, containing the number of reads assigned to each gene by featureCounts
We will be manipulating and reformatting the counts matrix into a format suitable for DESeq2.
The seqdata is a tibble (a kind of dataframe used by dplyr) in which the first six columns contain annotation information and the remaining columns contain the count data.
DESeq2 requires a simple object containing only the count data; we'll keep the gene IDs by setting them as the row names.
Let's create a new counts data object, countdata, that contains only the counts for the 12 samples.
Our sampleinfo object contains a column with the sample names. We should adjust the column names of our matrix to match them - we just need to remove the .bam suffix.
It is also critical to ensure that the samples (columns of the counts matrix) are in the same order as the rows of sampleinfo. When we load these objects into DESeq2 for the analysis it will not guess which row of sampleinfo belongs to which column of the counts matrix; it will assume they are in the same order.
# tibbles aren't allowed rownames so first we need to convert to a dataframe
countdata <- as.data.frame(seqdata) %>%
    column_to_rownames("Geneid") %>% # turn the Geneid column into rownames
    rename_all(str_remove, ".bam") %>% # remove the ".bam" suffix from the column names
    select(sampleinfo$Sample) %>% # keep sample columns using sampleinfo
    as.matrix()
head(countdata)
MCL1.DG MCL1.DH MCL1.DI MCL1.DJ MCL1.DK MCL1.DL MCL1.LA
ENSMUSG00000102693 0 0 0 0 0 0 0
ENSMUSG00000064842 0 0 0 0 0 0 0
ENSMUSG00000051951 665 446 86 361 559 447 0
ENSMUSG00000102851 0 0 0 0 0 0 0
ENSMUSG00000103377 0 0 0 0 1 0 0
ENSMUSG00000104017 0 0 0 0 0 0 0
MCL1.LB MCL1.LC MCL1.LD MCL1.LE MCL1.LF
ENSMUSG00000102693 0 0 0 0 0
ENSMUSG00000064842 0 0 0 0 0
ENSMUSG00000051951 0 0 0 2 0
ENSMUSG00000102851 0 0 0 0 0
ENSMUSG00000103377 0 0 0 0 0
ENSMUSG00000104017 0 0 0 0 0
Here, we used str_remove to remove the unwanted suffix from the column names. The stringr package has lots of useful functions for manipulating strings (text), e.g. str_replace or str_extract.
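For example, here is a quick illustration of these three functions on a made-up sample name (not part of the analysis):

```r
library(stringr)

x <- c("MCL1.DG.bam", "MCL1.LA.bam")

# remove the first match of a pattern
str_remove(x, ".bam")                     # "MCL1.DG" "MCL1.LA"
# replace the first match with something else
str_replace(x, ".bam", ".sorted.bam")
# extract the part of the string that matches a pattern
str_extract(x, "MCL1\\.[A-Z]+")           # "MCL1.DG" "MCL1.LA"
```

Note that these functions interpret the pattern as a regular expression, so the "." in ".bam" matches any character; to match a literal dot use fixed(".bam") or "\\.bam".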
For many analysis methods it is advisable to filter out as many uninformative genes as possible before starting the analysis, in order to reduce the impact of multiple testing correction on false discovery rates. This is normally done by filtering out genes with low numbers of reads, which are likely to be uninformative.
With DESeq2 this is not strictly necessary as it applies a process it calls independent filtering during the analysis. On the other hand, filtering out genes that are very lowly expressed does reduce the size of the data matrix, meaning that less memory is required and processing steps are carried out faster.
We will keep all genes where the total number of reads across all samples is greater than 5.
dim(countdata)
[1] 45513 12
keep <- rowSums(countdata) > 5
countdata <- countdata[keep,]
dim(countdata)
[1] 22013 12
Before moving on to the actual differential expression analysis it is important to assess the quality of our data.
First, we can plot how many reads we have for each sample. Whilst normalisation can account for imbalance in coverage across the samples, extreme differences may be indicative of underlying problems in the samples.
librarySizes <- countdata %>%
    as.data.frame() %>%
    summarise_all(sum) %>%
    gather("Sample", "Counts")
The ggplot2 package has emerged as an attractive alternative to the traditional plots provided by base R. A full overview of all capabilities of the package is available from the cheatsheet.
In brief:

- librarySizes is our data frame containing the variables we wish to plot
- aes creates a mapping between the variables in our data frame and the aesthetic properties of the plot: here x=Sample and y=Counts
- geom_bar specifies the particular type of plot we want (in this case a bar plot)

The real advantage of ggplot2 is the ability to change the appearance of our plot by mapping other variables to aspects of the plot. For example, we could colour the points based on a p-value cut-off. The colours are automatically chosen by ggplot2, but we can specify particular values.
ggplot(librarySizes, aes(x=Sample, y=Counts)) +
    geom_bar(stat="identity", col="black", fill="steelblue")
Count data is not normally distributed, so if we want to examine the distributions of the raw counts it is helpful to transform the data on to a log scale. Typically we use a log2 transformation; however, because the data is count data and will contain many 0s, we need to add a count of 1 to every value so that we avoid taking log2(0), which evaluates to -Inf.
We'll check the distribution of read counts using a boxplot, and we'll add some colour to see if there is any difference between sample groups.
# Get log2 counts
logcounts <- log2(countdata + 1)
# lets add the gene ids
rownames(logcounts) <- rownames(countdata)
# Check distributions of samples using boxplots
# Add a horizontal line at the overall median
logcounts %>%
    as.data.frame() %>%
    gather("Sample", "logCounts") %>%
    left_join(sampleinfo, "Sample") %>%
    ggplot(aes(x=Sample, y=logCounts, fill=Status)) +
    geom_hline(yintercept = median(logcounts), colour = "blue", size=2) +
    geom_boxplot()
From the boxplots we see that overall the distributions of the raw log counts are not identical, but they are not very different either. If a sample sits really far above or below the blue horizontal line (the overall median) we may need to investigate that sample further.
A principal components analysis (PCA) is an example of an unsupervised analysis, where we don't specify the grouping of the samples. If the experiment is well controlled and has worked well, we should see that replicate samples cluster closely, whilst the greatest sources of variation in the data should be between treatments/sample groups. PCA is also an incredibly useful tool for checking for outliers and batch effects.
To run the PCA we should first normalise our data for library size and transform to a log scale. DESeq2 provides two commands that can be used to do this; here we will use rlog. rlog performs a log2 transformation in a way that compensates for differences between samples for genes with low read counts, and also normalises between samples for library size.
You can read more about rlog, its alternative vst, and the comparison between the two here.
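To get a feel for the difference on our data, we could transform the counts both ways and plot one against the other (a quick sketch, not part of the main workflow; vst is considerably faster on large matrices):

```r
library(DESeq2)

# transform the filtered counts both ways
rlogMat <- rlog(countdata)
vstMat  <- vst(countdata)

# compare the two transformations for the first sample; the values
# agree closely for genes with moderate to high counts
plot(rlogMat[, 1], vstMat[, 1],
     xlab = "rlog", ylab = "vst", pch = 16, cex = 0.3)
abline(0, 1, col = "red")
```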
To plot the PCA results we will use the autoplot function from the ggfortify package (Tang, Horikoshi, and Li 2016). ggfortify is built on top of ggplot2 and is able to recognise common statistical objects, such as PCA or linear model results, and automatically generate a summary plot of the results in an appropriate manner.
library(ggfortify)
rlogcounts <- rlog(countdata)
# run PCA
pcDat <- prcomp(t(rlogcounts))
# plot PCA
autoplot(pcDat)
# Let's add colour to look at the clustering by Status
autoplot(pcDat,
         data = sampleinfo,
         colour="Status",
         size=5)
# Let's add colour to look at the clustering by Cell Type
autoplot(pcDat,
         data = sampleinfo,
         colour="CellType",
         size=5)
# We could use shape for one of the factors
autoplot(pcDat,
         data = sampleinfo,
         colour="Status",
         shape="CellType",
         size=5)
# Specify some clearer shapes to use that have a black outline and use fill
autoplot(pcDat,
         data = sampleinfo,
         fill="Status",
         shape="CellType",
         size=5) +
    scale_shape_manual(values=c(21, 24)) +
    guides(fill = guide_legend(override.aes=list(shape=22)))
Discussion
Look at the last PCA plot. What is the greatest source of variation? Is there something strange going on with the samples? Let’s identify these samples:
# setting shape to FALSE causes the plot to default to using the labels
autoplot(pcDat,
         data = sampleinfo,
         colour="CellType",
         shape=FALSE,
         label.size=6)
The mislabelled samples are MCL1.DG, which is labelled as luminal but should be basal, and MCL1.LA, which is labelled as basal but should be luminal. Let's fix the sample sheet and save the new version.
sampleinfo <- sampleinfo %>%
    mutate(CellType=ifelse(Sample=="MCL1.DG", "basal", CellType)) %>%
    mutate(CellType=ifelse(Sample=="MCL1.LA", "luminal", CellType)) %>%
    mutate(Group=ifelse(Sample=="MCL1.DG", "basal.virgin", Group)) %>%
    mutate(Group=ifelse(Sample=="MCL1.LA", "luminal.virgin", Group))
write_tsv(sampleinfo, "data/SampleInfo_Corrected.txt")
Let’s look at the PCA now.
autoplot(pcDat,
         data = sampleinfo,
         fill="Status",
         shape="CellType",
         size=5) +
    scale_shape_manual(values=c(21, 24)) +
    guides(fill = guide_legend(override.aes=list(shape=22)))
Discussion
What is the greatest source of variation in the data (i.e. what does dimension 1 represent)? What is the second greatest source of variation in the data?
Replicate samples from the same group cluster together in the plot, while samples from different groups form separate clusters. This indicates that the differences between groups are larger than the differences within groups, i.e. the differential expression signal is greater than the within-group variability and can be detected. The differences between virgin, pregnant and lactating mice are greater for the luminal cells than for the basal cells.
Clustering in the PCA plot can be used to motivate changes to the design matrix in light of potential batch effects. For example, imagine that the first replicate of each group was prepared at a separate time from the second replicate. If the PCA plot showed separation of samples by time, it might be worthwhile including time in the downstream analysis to account for the time-based effect.
autoplot plots the first two dimensions by default; however, you can plot higher dimensions using the x and y arguments.
autoplot(pcDat,
         data = sampleinfo,
         fill = "Status",
         shape = "CellType",
         size = 5,
         x = 2,
         y = 3) +
    scale_shape_manual(values=c(21, 24)) +
    guides(fill = guide_legend(override.aes=list(shape=22)))
Another alternative is to generate a Multidimensional Scaling (MDS) plot. MDS is similar to PCA; in an MDS plot the distance between each pair of samples is calculated as the 'leading fold change', which is defined as the root-mean-square of the largest 500 log2-fold changes between that pair of samples. The Glimma package creates interactive plots that allow the user to explore the different dimensions.
library(Glimma)
glMDSPlot(rlogcounts,
          labels = sampleinfo$Sample,
          groups = sampleinfo[,c("CellType", "Status")],
          folder = "mds")
Glimma was created to make interactive versions of some of the popular plots from the limma package. At present it can be used to obtain MDS plots. The output of glMDSPlot is an html page (/mds/MDS-Plot.html) that shows the MDS plot on the left, and a barplot of the amount of variation explained by each dimension on the right. The user can hover over points to find out sample information, and switch between successive dimensions in the MDS plot by clicking on the bars in the barplot. The MDS plot shows dimensions 1 and 2 by default.
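If you just want a static MDS plot, the plotMDS function in limma computes the same leading-fold-change distances. A quick sketch using our rlog-transformed counts:

```r
library(limma)

# distances based on the top 500 log2-fold changes between each pair of samples
plotMDS(rlogcounts,
        top = 500,
        labels = sampleinfo$Sample)
```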
An alternative to PCA plots for examining relationships between samples is using hierarchical clustering. Heatmaps are a nice visualisation to examine hierarchical clustering of your samples. We can do this using the heatmap.2 function from the gplots package. In this example heatmap.2 calculates a matrix of euclidean distances from the logcounts object.
The RColorBrewer package has nicer colour schemes, accessed using the brewer.pal function. “RdYlBu” is a common choice, and “Spectral” is also nice.
Note: the png function opens a png file, to which the plots created after it are saved; the file is closed when dev.off() is called. To see your plots interactively, simply omit those two lines.
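For reference, the pattern looks like this (the filename is just an illustration):

```r
# open a png graphics device; plots drawn after this go to the file
png("results/myHeatmap.png", width = 800, height = 600)
plot(1:10)  # any plotting commands, e.g. a heatmap.2 call
dev.off()   # close the device, writing the file to disk
```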
We don’t want to plot a heatmap of all 22013 genes, so let’s select data for the 500 most variable genes and plot the heatmap.
# We estimate the variance for each row in the rlogcounts matrix
var_genes <- apply(rlogcounts, 1, var)
head(var_genes)
[1] 7.4029766 0.1582180 0.7452796 1.0926386 0.1164083 0.9256107
# Get the row numbers for the top 500 most variable genes
select_var <- order(var_genes, decreasing=TRUE)[1:500]
# Subset logcounts matrix
hmData <- rlogcounts[select_var,]
library(gplots)
library(RColorBrewer)
# Get some nicer colours
mypalette <- brewer.pal(11, "RdYlBu")
# http://colorbrewer2.org/#type=sequential&scheme=BuGn&n=3
morecols <- colorRampPalette(mypalette)
# Set up colour vector for celltype variable
col.cell <- c("purple","orange")[factor(sampleinfo$CellType)] # index by factor so the characters map to colours
# Plot the heatmap
heatmap.2(hmData,
          col=rev(morecols(50)),
          trace="column",
          main="Top 500 most variable genes across samples",
          ColSideColors=col.cell,
          scale="row")
Additional Challenge 3
Redo the heatmap using the top 500 LEAST variable genes.
Change the colour scheme to "PiYG" and redo the heatmap. Try ?RColorBrewer and see what other colour schemes are available.
Change the sample names to group using the labCol argument.
Remove the gene names from the right-hand side of the plot using labRow.
Solution
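A possible solution (a sketch; note that the least variable genes include many with almost no variance, so the row-scaled heatmap will look much noisier and flatter):

```r
# the 500 LEAST variable genes come from the other end of the ordering
select_var_low <- order(var_genes)[1:500]
hmDataLow <- rlogcounts[select_var_low, ]

# "PiYG" colour scheme - see ?RColorBrewer for the others
lowPalette <- colorRampPalette(brewer.pal(11, "PiYG"))

heatmap.2(hmDataLow,
          col=rev(lowPalette(50)),
          trace="none",
          main="Top 500 least variable genes across samples",
          ColSideColors=col.cell,
          scale="row",
          labCol=sampleinfo$Group,              # group names on the columns
          labRow=rep("", nrow(hmDataLow)))      # no gene names on the rows
```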
Next we’ll create a DESeqDataSet object. This is an object used by DESeq2 to store count data. It has a number of slots for storing count data, sample information, the model design for the differential expression analysis, and various other parameters about the data.
In the simplest form we need to provide counts, sample information and a design formula. For now we just want to perform some QC, so we will provide a simple model in which we are only concerned with contrasting the differential expression between cell types. We can change the model later if we want to.
# first lets check that our rows and columns match
all(sampleinfo$Sample == colnames(countdata))
[1] TRUE
# create the design formula
design <- as.formula(~ CellType)
# create the DESeqDataSet object
ddsObj <- DESeqDataSetFromMatrix(countData = countdata,
                                 colData = sampleinfo,
                                 design = design)
some variables in design formula are characters, converting to factors
The estimateSizeFactors command of DESeq2 applies the "median ratio method" of normalisation (Anders and Huber 2010). This generates a set of normalisation factors and adds them to the ddsObj in the colData slot.
# Apply normalisation to DDS object
ddsObj <- estimateSizeFactors(ddsObj)
Take a look at the normalisation factors for these samples.
sizeFactors(ddsObj)
MCL1.DG MCL1.DH MCL1.DI MCL1.DJ MCL1.DK MCL1.DL MCL1.LA MCL1.LB
1.3001316 1.1931245 1.2176601 1.0775276 0.9804917 0.9739114 1.2819584 1.3645962
MCL1.LC MCL1.LD MCL1.LE MCL1.LF
1.0348558 0.9240333 0.5722305 0.5795097
A normalisation factor below one indicates that the library size will be scaled down, as there is more suppression (i.e., composition bias) in that library relative to the other libraries. This is equivalent to scaling the counts in that sample upwards. Conversely, a factor above one scales the library size up and is equivalent to scaling the counts downwards.
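To make the method concrete, here is a sketch of the median-of-ratios calculation on a small made-up matrix (DESeq2's estimateSizeFactorsForMatrix does the same thing, with extra handling for zero counts):

```r
# toy counts: 4 genes x 3 samples; sample 3 is sample 1 at double the depth
toy <- matrix(c(10, 20, 30, 40,
                12, 18, 33, 37,
                20, 40, 60, 80), ncol = 3)

# 1. the "reference sample": geometric mean of each gene across samples
loggeomeans <- rowMeans(log(toy))

# 2. each sample's size factor is the median ratio of its counts
#    to the reference sample
sizeFactors_manual <- apply(toy, 2, function(cnts) {
    exp(median(log(cnts) - loggeomeans))
})

sizeFactors_manual  # sample 3's factor is exactly twice sample 1's
```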
The MCL1.LE and MCL1.LF samples have much smaller normalisation factors, and MCL1.LA and MCL1.LB have the largest. If we generate MA plots for these samples using the plotMA function from limma (Ritchie et al. 2015), we should be able to see the composition bias problem.
The rlog data (rlogcounts) has already been normalised for both library size and composition bias, so here we will use logcounts, the log2-transformed raw countdata that we created earlier.
library(limma)
par(mfrow=c(1,2))
plotMA(logcounts, array = 7)
abline(h=0,col="grey")
plotMA(logcounts, array = 11)
abline(h=0,col="grey")
The MA plots show average expression (mean; x-axis) against log-fold change (difference; y-axis). Because our ddsObj object contains the normalisation factors, if we redo these plots using the normalised counts from ddsObj, we should see that the composition bias problem has been solved.
normalizedCounts <- counts(ddsObj, normalized=TRUE)
logNormalizedCounts <- log2(normalizedCounts + 1)
par(mfrow=c(1,2))
plotMA(logNormalizedCounts, array = 7)
abline(h=0,col="grey")
plotMA(logNormalizedCounts, array = 11)
abline(h=0,col="grey")
Challenge 4
Plot the biased and unbiased MA plots side by side for the same sample to compare before and after normalisation.
Solution
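One possible approach (a sketch): put the raw and normalised MA plots for the same sample next to each other. Here we use sample 7 (MCL1.LA):

```r
par(mfrow=c(1,2))
# before normalisation: raw log2 counts
plotMA(logcounts, array = 7, main = "MCL1.LA before normalisation")
abline(h=0, col="grey")
# after normalisation: the log-fold changes should now be centred on zero
plotMA(logNormalizedCounts, array = 7, main = "MCL1.LA after normalisation")
abline(h=0, col="grey")
```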
We can save a few data objects to use later so we don't have to rerun everything.
save(countdata, sampleinfo, file="results/preprocessing.Rdata")
Fu, Nai Yang, Anne C Rios, Bhupinder Pal, Rina Soetanto, Aaron T L Lun, Kevin Liu, Tamara Beck, et al. 2015. “EGF-mediated induction of Mcl-1 at the switch to lactation is essential for alveolar cell survival.” Nature Cell Biology 17 (4): 365–75. doi:10.1038/ncb3117.
Anders, Simon, and Wolfgang Huber. 2010. “Differential Expression Analysis for Sequence Count Data.” Genome Biology 11 (10): R106. doi:10.1186/gb-2010-11-10-r106.
Kim, Daehwan, Ben Langmead, and Steven L. Salzberg. 2015. “HISAT: A Fast Spliced Aligner with Low Memory Requirements.” Nature Methods 12 (4): 357–60. doi:10.1038/nmeth.3317.
Liao, Yang, Gordon K Smyth, and Wei Shi. 2014. “featureCounts: an efficient general purpose program for assigning sequence reads to genomic features.” Bioinformatics (Oxford, England) 30 (7): 923–30. doi:10.1093/bioinformatics/btt656.
Love, Michael I., Wolfgang Huber, and Simon Anders. 2014. “Moderated Estimation of Fold Change and Dispersion for RNA-seq Data with DESeq2.” Genome Biology 15 (12): 550. doi:10.1186/s13059-014-0550-8.
Tang, Yuan, Masaaki Horikoshi, and Wenxuan Li. 2016. “Ggfortify: Unified Interface to Visualize Statistical Result of Popular R Packages.” The R Journal 8 (2). https://journal.r-project.org/.